My Tesla Was Driving Itself Perfectly--Until It Crashed

The Atlantic - Technology

The smell was strange. The concrete wall was too close. One of my kids was standing on the sidewalk next to our car--not crying, just confused. The seat belt had held. The crumple zone had crumpled.



Sample-Efficient Reinforcement Learning with Stochastic Ensemble Value Expansion

Jacob Buckman, Danijar Hafner, George Tucker, Eugene Brevdo, Honglak Lee

Neural Information Processing Systems

We propose stochastic ensemble value expansion (STEVE), a novel model-based technique that mitigates the compounding errors introduced when a learned dynamics model is used for value estimation. By dynamically interpolating between model rollouts of various horizon lengths for each individual example, STEVE ensures that the model is only utilized when doing so does not introduce significant errors.
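The interpolation described above can be sketched as an inverse-variance weighting over per-horizon target estimates: an ensemble produces several candidate TD targets at each rollout horizon, and horizons whose ensemble estimates disagree (high variance, i.e. likely model error) receive less weight. This is a minimal NumPy sketch, assuming the candidate targets have already been computed; the function name and array shapes are illustrative, not the paper's implementation.

```python
import numpy as np

def steve_target(candidate_targets):
    """Inverse-variance weighted combination of per-horizon TD targets.

    candidate_targets: array of shape (H, M) -- H rollout horizons,
    M ensemble estimates per horizon (hypothetical layout; STEVE
    derives these from an ensemble of dynamics models and Q-functions).
    """
    means = candidate_targets.mean(axis=1)
    variances = candidate_targets.var(axis=1) + 1e-8  # avoid divide-by-zero
    weights = 1.0 / variances          # downweight high-disagreement horizons
    weights /= weights.sum()           # normalize to a convex combination
    return float(np.dot(weights, means))
```

A horizon whose ensemble members agree closely dominates the weighted target, which is the mechanism by which the model is "only utilized when doing so does not introduce significant errors."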


NVIDIA- and Uber-backed Nuro is testing autonomous vehicles in Tokyo

Engadget

The city's narrow streets and brutal traffic will present a 'pressure test' for the tech, its CEO said. US self-driving startup Nuro, which is backed by the likes of NVIDIA, Toyota and Uber, has reportedly started testing its autonomous vehicles on Tokyo's challenging streets. The company, which plans to launch a robotaxi service with Uber and Lucid in San Francisco this year, will be testing a handful of vehicles in the city. Human safety drivers will be at the wheel, as is required by Japanese law. Tokyo presents a challenge for autonomous vehicles, given its narrow, crowded streets and driving on the left side of the road.





Quantifying and Attributing Submodel Uncertainty in Stochastic Simulation Models and Digital Twins

Mohammadmahdi Ghasemloo, David J. Eckman, Yaxian Li

arXiv.org Machine Learning

Stochastic simulation is widely used to study complex systems composed of various interconnected subprocesses, such as input processes, routing and control logic, optimization routines, and data-driven decision modules. In practice, these subprocesses may be inherently unknown or too computationally intensive to directly embed in the simulation model. Replacing these elements with estimated or learned approximations introduces a form of epistemic uncertainty that we refer to as submodel uncertainty. This paper investigates how submodel uncertainty affects the estimation of system performance metrics. We develop a framework for quantifying submodel uncertainty in stochastic simulation models and extend the framework to digital-twin settings, where simulation experiments are repeatedly conducted with the model initialized from observed system states. Building on approaches from input uncertainty analysis, we leverage bootstrapping and Bayesian model averaging to construct quantile-based confidence or credible intervals for key performance indicators. We propose a tree-based method that decomposes total output variability and attributes uncertainty to individual submodels in the form of importance scores. The proposed framework is model-agnostic and accommodates both parametric and nonparametric submodels under frequentist and Bayesian modeling paradigms. A synthetic numerical experiment and a more realistic digital-twin simulation of a contact center illustrate the importance of understanding how and how much individual submodels contribute to overall uncertainty.
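The quantile-based bootstrap intervals mentioned in the abstract can be illustrated with a minimal sketch: resample the data used to fit a submodel, recompute the performance indicator under each resample, and take empirical quantiles. This simplified stand-in computes the KPI directly from the resampled data rather than refitting a submodel and re-running a full simulation; the function name and parameters are hypothetical, not from the paper.

```python
import numpy as np

def bootstrap_kpi_interval(data, kpi, n_boot=1000, alpha=0.05, seed=0):
    """Quantile-based bootstrap confidence interval for a KPI.

    data: observations that would be used to estimate a submodel.
    kpi:  callable mapping a resampled dataset to a scalar indicator.
    """
    rng = np.random.default_rng(seed)
    stats = []
    for _ in range(n_boot):
        # Resample with replacement to mimic submodel estimation error.
        sample = rng.choice(data, size=len(data), replace=True)
        stats.append(kpi(sample))
    lo, hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
    return float(lo), float(hi)
```

In the paper's digital-twin setting, each bootstrap replicate would instead refit the submodels and rerun the simulation from the observed system state; the quantile step at the end is the same.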